From patchwork Wed Oct 13 05:56:19 2021
X-Patchwork-Submitter: Srujana Challa
X-Patchwork-Id: 12554893
X-Patchwork-Delegate: kuba@kernel.org
From: Srujana Challa
Cc: kernel test robot
Subject: [PATCH v2 net-next 1/3] octeontx2-af: Enable CPT HW interrupts
Date: Wed, 13 Oct 2021 11:26:19 +0530
Message-ID: <20211013055621.1812301-2-schalla@marvell.com>
In-Reply-To: <20211013055621.1812301-1-schalla@marvell.com>
References: <20211013055621.1812301-1-schalla@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

This patch enables CPT HW interrupts and registers interrupt handlers for them.
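For orientation, the interrupt plumbing added below repeats one pattern per CPT AF
MSI-X vector: request the IRQ at the block's vector offset, enable the source through
its _ENA_W1S register, and have the handler read the W1C cause register, log it, and
write the value back to acknowledge it. A condensed sketch of that pattern, pieced
together from the diff that follows (error paths and the CN10K vector layout omitted):

	/* Handler: read the W1C cause register, report it, write it back to clear. */
	static irqreturn_t rvu_cpt_af_rvu_intr_handler(int irq, void *ptr)
	{
		struct rvu_block *block = ptr;
		struct rvu *rvu = block->rvu;
		u64 reg = rvu_read64(rvu, block->addr, CPT_AF_RVU_INT);

		dev_err_ratelimited(rvu->dev, "Received CPTAF RVU irq : 0x%llx", reg);
		rvu_write64(rvu, block->addr, CPT_AF_RVU_INT, reg);
		return IRQ_HANDLED;
	}

	/* Registration: request_irq() at the block's MSI-X offset, then enable the
	 * source via its W1S register. rvu_cpt_do_register_interrupt() is the small
	 * request_irq() wrapper introduced by this patch.
	 */
	ret = rvu_cpt_do_register_interrupt(block, offs + CPT_AF_INT_VEC_RVU,
					    rvu_cpt_af_rvu_intr_handler, "CPTAF RVU");
	if (!ret)
		rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT_ENA_W1S, 0x1);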
Signed-off-by: Srujana Challa Reported-by: kernel test robot --- .../net/ethernet/marvell/octeontx2/af/rvu.c | 13 + .../net/ethernet/marvell/octeontx2/af/rvu.h | 3 + .../ethernet/marvell/octeontx2/af/rvu_cpt.c | 230 ++++++++++++++++++ .../marvell/octeontx2/af/rvu_struct.h | 18 ++ 4 files changed, 264 insertions(+) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c index 0a1e9f6e216a..7698a67f6465 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c @@ -854,6 +854,7 @@ static int rvu_setup_nix_hw_resource(struct rvu *rvu, int blkaddr) block->lfcfg_reg = NIX_PRIV_LFX_CFG; block->msixcfg_reg = NIX_PRIV_LFX_INT_CFG; block->lfreset_reg = NIX_AF_LF_RST; + block->rvu = rvu; sprintf(block->name, "NIX%d", blkid); rvu->nix_blkaddr[blkid] = blkaddr; return rvu_alloc_bitmap(&block->lf); @@ -883,6 +884,7 @@ static int rvu_setup_cpt_hw_resource(struct rvu *rvu, int blkaddr) block->lfcfg_reg = CPT_PRIV_LFX_CFG; block->msixcfg_reg = CPT_PRIV_LFX_INT_CFG; block->lfreset_reg = CPT_AF_LF_RST; + block->rvu = rvu; sprintf(block->name, "CPT%d", blkid); return rvu_alloc_bitmap(&block->lf); } @@ -940,6 +942,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu) block->lfcfg_reg = NPA_PRIV_LFX_CFG; block->msixcfg_reg = NPA_PRIV_LFX_INT_CFG; block->lfreset_reg = NPA_AF_LF_RST; + block->rvu = rvu; sprintf(block->name, "NPA"); err = rvu_alloc_bitmap(&block->lf); if (err) { @@ -979,6 +982,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu) block->lfcfg_reg = SSO_PRIV_LFX_HWGRP_CFG; block->msixcfg_reg = SSO_PRIV_LFX_HWGRP_INT_CFG; block->lfreset_reg = SSO_AF_LF_HWGRP_RST; + block->rvu = rvu; sprintf(block->name, "SSO GROUP"); err = rvu_alloc_bitmap(&block->lf); if (err) { @@ -1003,6 +1007,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu) block->lfcfg_reg = SSOW_PRIV_LFX_HWS_CFG; block->msixcfg_reg = SSOW_PRIV_LFX_HWS_INT_CFG; block->lfreset_reg = SSOW_AF_LF_HWS_RST; + block->rvu = rvu; sprintf(block->name, "SSOWS"); err = rvu_alloc_bitmap(&block->lf); if (err) { @@ -1028,6 +1033,7 @@ static int rvu_setup_hw_resources(struct rvu *rvu) block->lfcfg_reg = TIM_PRIV_LFX_CFG; block->msixcfg_reg = TIM_PRIV_LFX_INT_CFG; block->lfreset_reg = TIM_AF_LF_RST; + block->rvu = rvu; sprintf(block->name, "TIM"); err = rvu_alloc_bitmap(&block->lf); if (err) { @@ -2724,6 +2730,8 @@ static void rvu_unregister_interrupts(struct rvu *rvu) { int irq; + rvu_cpt_unregister_interrupts(rvu); + /* Disable the Mbox interrupt */ rvu_write64(rvu, BLKADDR_RVUM, RVU_AF_PFAF_MBOX_INT_ENA_W1C, INTR_MASK(rvu->hw->total_pfs) & ~1ULL); @@ -2933,6 +2941,11 @@ static int rvu_register_interrupts(struct rvu *rvu) goto fail; } rvu->irq_allocated[offset] = true; + + ret = rvu_cpt_register_interrupts(rvu); + if (ret) + goto fail; + return 0; fail: diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 58b166698fa5..cdbd2846127d 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -101,6 +101,7 @@ struct rvu_block { u64 msixcfg_reg; u64 lfreset_reg; unsigned char name[NAME_SIZE]; + struct rvu *rvu; }; struct nix_mcast { @@ -812,6 +813,8 @@ bool is_mcam_entry_enabled(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr, int index); /* CPT APIs */ +int rvu_cpt_register_interrupts(struct rvu *rvu); +void rvu_cpt_unregister_interrupts(struct rvu *rvu); int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int lf, 
int slot); /* CN10K RVU */ diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c index 267d092b8e97..259131435edc 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c @@ -37,6 +37,236 @@ (_rsp)->free_sts_##etype = free_sts; \ }) +static irqreturn_t rvu_cpt_af_flt_intr_handler(int irq, void *ptr) +{ + struct rvu_block *block = ptr; + struct rvu *rvu = block->rvu; + int blkaddr = block->addr; + u64 reg0, reg1, reg2; + + reg0 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(0)); + reg1 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(1)); + if (!is_rvu_otx2(rvu)) { + reg2 = rvu_read64(rvu, blkaddr, CPT_AF_FLTX_INT(2)); + dev_err_ratelimited(rvu->dev, + "Received CPTAF FLT irq : 0x%llx, 0x%llx, 0x%llx", + reg0, reg1, reg2); + } else { + dev_err_ratelimited(rvu->dev, + "Received CPTAF FLT irq : 0x%llx, 0x%llx", + reg0, reg1); + } + + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(0), reg0); + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(1), reg1); + if (!is_rvu_otx2(rvu)) + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT(2), reg2); + + return IRQ_HANDLED; +} + +static irqreturn_t rvu_cpt_af_rvu_intr_handler(int irq, void *ptr) +{ + struct rvu_block *block = ptr; + struct rvu *rvu = block->rvu; + int blkaddr = block->addr; + u64 reg; + + reg = rvu_read64(rvu, blkaddr, CPT_AF_RVU_INT); + dev_err_ratelimited(rvu->dev, "Received CPTAF RVU irq : 0x%llx", reg); + + rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT, reg); + return IRQ_HANDLED; +} + +static irqreturn_t rvu_cpt_af_ras_intr_handler(int irq, void *ptr) +{ + struct rvu_block *block = ptr; + struct rvu *rvu = block->rvu; + int blkaddr = block->addr; + u64 reg; + + reg = rvu_read64(rvu, blkaddr, CPT_AF_RAS_INT); + dev_err_ratelimited(rvu->dev, "Received CPTAF RAS irq : 0x%llx", reg); + + rvu_write64(rvu, blkaddr, CPT_AF_RAS_INT, reg); + return IRQ_HANDLED; +} + +static int rvu_cpt_do_register_interrupt(struct rvu_block *block, int irq_offs, + irq_handler_t handler, + const char *name) +{ + struct rvu *rvu = block->rvu; + int ret; + + ret = request_irq(pci_irq_vector(rvu->pdev, irq_offs), handler, 0, + name, block); + if (ret) { + dev_err(rvu->dev, "RVUAF: %s irq registration failed", name); + return ret; + } + + WARN_ON(rvu->irq_allocated[irq_offs]); + rvu->irq_allocated[irq_offs] = true; + return 0; +} + +static void cpt_10k_unregister_interrupts(struct rvu_block *block, int off) +{ + struct rvu *rvu = block->rvu; + int blkaddr = block->addr; + int i; + + /* Disable all CPT AF interrupts */ + for (i = 0; i < CPT_10K_AF_INT_VEC_RVU; i++) + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(i), 0x1); + rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT_ENA_W1C, 0x1); + rvu_write64(rvu, blkaddr, CPT_AF_RAS_INT_ENA_W1C, 0x1); + + for (i = 0; i < CPT_10K_AF_INT_VEC_CNT; i++) + if (rvu->irq_allocated[off + i]) { + free_irq(pci_irq_vector(rvu->pdev, off + i), block); + rvu->irq_allocated[off + i] = false; + } +} + +static void cpt_unregister_interrupts(struct rvu *rvu, int blkaddr) +{ + struct rvu_hwinfo *hw = rvu->hw; + struct rvu_block *block; + int i, offs; + + if (!is_block_implemented(rvu->hw, blkaddr)) + return; + offs = rvu_read64(rvu, blkaddr, CPT_PRIV_AF_INT_CFG) & 0x7FF; + if (!offs) { + dev_warn(rvu->dev, + "Failed to get CPT_AF_INT vector offsets\n"); + return; + } + block = &hw->block[blkaddr]; + if (!is_rvu_otx2(rvu)) + return cpt_10k_unregister_interrupts(block, offs); + + /* Disable all CPT AF interrupts */ + for (i = 0; i < CPT_AF_INT_VEC_RVU; 
i++) + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1C(i), 0x1); + rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT_ENA_W1C, 0x1); + rvu_write64(rvu, blkaddr, CPT_AF_RAS_INT_ENA_W1C, 0x1); + + for (i = 0; i < CPT_AF_INT_VEC_CNT; i++) + if (rvu->irq_allocated[offs + i]) { + free_irq(pci_irq_vector(rvu->pdev, offs + i), block); + rvu->irq_allocated[offs + i] = false; + } +} + +void rvu_cpt_unregister_interrupts(struct rvu *rvu) +{ + cpt_unregister_interrupts(rvu, BLKADDR_CPT0); + cpt_unregister_interrupts(rvu, BLKADDR_CPT1); +} + +static int cpt_10k_register_interrupts(struct rvu_block *block, int off) +{ + struct rvu *rvu = block->rvu; + int blkaddr = block->addr; + char irq_name[16]; + int i, ret; + + for (i = CPT_10K_AF_INT_VEC_FLT0; i < CPT_10K_AF_INT_VEC_RVU; i++) { + snprintf(irq_name, sizeof(irq_name), "CPTAF FLT%d", i); + ret = rvu_cpt_do_register_interrupt(block, off + i, + rvu_cpt_af_flt_intr_handler, + irq_name); + if (ret) + goto err; + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0x1); + } + + ret = rvu_cpt_do_register_interrupt(block, off + CPT_10K_AF_INT_VEC_RVU, + rvu_cpt_af_rvu_intr_handler, + "CPTAF RVU"); + if (ret) + goto err; + rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT_ENA_W1S, 0x1); + + ret = rvu_cpt_do_register_interrupt(block, off + CPT_10K_AF_INT_VEC_RAS, + rvu_cpt_af_ras_intr_handler, + "CPTAF RAS"); + if (ret) + goto err; + rvu_write64(rvu, blkaddr, CPT_AF_RAS_INT_ENA_W1S, 0x1); + + return 0; +err: + rvu_cpt_unregister_interrupts(rvu); + return ret; +} + +static int cpt_register_interrupts(struct rvu *rvu, int blkaddr) +{ + struct rvu_hwinfo *hw = rvu->hw; + struct rvu_block *block; + int i, offs, ret = 0; + char irq_name[16]; + + if (!is_block_implemented(rvu->hw, blkaddr)) + return 0; + + block = &hw->block[blkaddr]; + offs = rvu_read64(rvu, blkaddr, CPT_PRIV_AF_INT_CFG) & 0x7FF; + if (!offs) { + dev_warn(rvu->dev, + "Failed to get CPT_AF_INT vector offsets\n"); + return 0; + } + + if (!is_rvu_otx2(rvu)) + return cpt_10k_register_interrupts(block, offs); + + for (i = CPT_AF_INT_VEC_FLT0; i < CPT_AF_INT_VEC_RVU; i++) { + snprintf(irq_name, sizeof(irq_name), "CPTAF FLT%d", i); + ret = rvu_cpt_do_register_interrupt(block, offs + i, + rvu_cpt_af_flt_intr_handler, + irq_name); + if (ret) + goto err; + rvu_write64(rvu, blkaddr, CPT_AF_FLTX_INT_ENA_W1S(i), 0x1); + } + + ret = rvu_cpt_do_register_interrupt(block, offs + CPT_AF_INT_VEC_RVU, + rvu_cpt_af_rvu_intr_handler, + "CPTAF RVU"); + if (ret) + goto err; + rvu_write64(rvu, blkaddr, CPT_AF_RVU_INT_ENA_W1S, 0x1); + + ret = rvu_cpt_do_register_interrupt(block, offs + CPT_AF_INT_VEC_RAS, + rvu_cpt_af_ras_intr_handler, + "CPTAF RAS"); + if (ret) + goto err; + rvu_write64(rvu, blkaddr, CPT_AF_RAS_INT_ENA_W1S, 0x1); + + return 0; +err: + rvu_cpt_unregister_interrupts(rvu); + return ret; +} + +int rvu_cpt_register_interrupts(struct rvu *rvu) +{ + int ret; + + ret = cpt_register_interrupts(rvu, BLKADDR_CPT0); + if (ret) + return ret; + + return cpt_register_interrupts(rvu, BLKADDR_CPT1); +} + static int get_cpt_pf_num(struct rvu *rvu) { int i, domain_nr, cpt_pf_num = -1; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h index 77ac96693f04..edc9367b1b95 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_struct.h @@ -62,6 +62,24 @@ enum rvu_af_int_vec_e { RVU_AF_INT_VEC_CNT = 0x5, }; +/* CPT Admin function Interrupt Vector Enumeration */ +enum cpt_af_int_vec_e { + CPT_AF_INT_VEC_FLT0 
= 0x0,
+	CPT_AF_INT_VEC_FLT1 = 0x1,
+	CPT_AF_INT_VEC_RVU  = 0x2,
+	CPT_AF_INT_VEC_RAS  = 0x3,
+	CPT_AF_INT_VEC_CNT  = 0x4,
+};
+
+enum cpt_10k_af_int_vec_e {
+	CPT_10K_AF_INT_VEC_FLT0 = 0x0,
+	CPT_10K_AF_INT_VEC_FLT1 = 0x1,
+	CPT_10K_AF_INT_VEC_FLT2 = 0x2,
+	CPT_10K_AF_INT_VEC_RVU  = 0x3,
+	CPT_10K_AF_INT_VEC_RAS  = 0x4,
+	CPT_10K_AF_INT_VEC_CNT  = 0x5,
+};
+
 /* NPA Admin function Interrupt Vector Enumeration */
 enum npa_af_int_vec_e {
 	NPA_AF_INT_VEC_RVU = 0x0,

From patchwork Wed Oct 13 05:56:20 2021
X-Patchwork-Submitter: Srujana Challa
X-Patchwork-Id: 12554895
X-Patchwork-Delegate: kuba@kernel.org
From: Srujana Challa
Cc: Nithin Dabilpuram
Subject: [PATCH v2 net-next 2/3] octeontx2-af: Perform cpt lf teardown in non FLR path
Date: Wed, 13 Oct 2021 11:26:20 +0530
Message-ID: <20211013055621.1812301-3-schalla@marvell.com>
In-Reply-To: <20211013055621.1812301-1-schalla@marvell.com>
References: <20211013055621.1812301-1-schalla@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

From: Nithin Dabilpuram

Perform CPT LF teardown in the non-FLR path as well, via cpt_lf_free().
Currently the CPT LF teardown and reset sequence is done only when an FLR
is handled with the CPT LF still attached.

This patch also fixes cpt_lf_alloc() so that EXE_LDWB in CPT_AF_LFX_CTL2,
which is set on reset and is better for performance, is preserved instead
of being cleared when that register is rewritten.

Signed-off-by: Nithin Dabilpuram
---
 .../net/ethernet/marvell/octeontx2/af/rvu.c   |  3 +-
 .../net/ethernet/marvell/octeontx2/af/rvu.h   |  3 +-
 .../ethernet/marvell/octeontx2/af/rvu_cpt.c   | 32 +++++++++++--------
 3 files changed, 22 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
index 7698a67f6465..cb56e171ddd4 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.c
@@ -2532,7 +2532,8 @@ static void rvu_blklf_teardown(struct rvu *rvu, u16 pcifunc, u8 blkaddr)
 			rvu_npa_lf_teardown(rvu, pcifunc, lf);
 		else if ((block->addr == BLKADDR_CPT0) ||
 			 (block->addr == BLKADDR_CPT1))
-			rvu_cpt_lf_teardown(rvu, pcifunc, lf, slot);
+			rvu_cpt_lf_teardown(rvu, pcifunc, block->addr, lf,
+					    slot);
 
 		err = rvu_lf_reset(rvu, block, lf);
 		if (err) {
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
index cdbd2846127d..75aa0b8cfe58 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h
@@ -815,7 +815,8 @@ bool is_mcam_entry_enabled(struct rvu *rvu, struct npc_mcam *mcam, int blkaddr,
 /* CPT APIs */
 int rvu_cpt_register_interrupts(struct rvu *rvu);
 void rvu_cpt_unregister_interrupts(struct rvu *rvu);
-int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int lf, int slot);
+int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf,
+			int slot);
 
 /* CN10K RVU */
 int rvu_set_channels_base(struct rvu *rvu);
diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
index 259131435edc..7bab20403bba 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
+++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c
@@ -377,9 +377,13 @@ int rvu_mbox_handler_cpt_lf_alloc(struct rvu *rvu,
 
 		rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), val);
 
-		/* Set CPT LF NIX_PF_FUNC and SSO_PF_FUNC */
-		val = (u64)req->nix_pf_func << 48 |
-		       (u64)req->sso_pf_func << 32;
+		/* Set CPT LF NIX_PF_FUNC and SSO_PF_FUNC. EXE_LDWB is set
+		 * on reset.
+		 */
+		val = rvu_read64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf));
+		val &= ~(GENMASK_ULL(63, 48) | GENMASK_ULL(47, 32));
+		val |= ((u64)req->nix_pf_func << 48 |
+			(u64)req->sso_pf_func << 32);
 		rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf), val);
 	}
 
@@ -389,7 +393,7 @@ static int cpt_lf_free(struct rvu *rvu, struct msg_req *req, int blkaddr)
 {
 	u16 pcifunc = req->hdr.pcifunc;
-	int num_lfs, cptlf, slot;
+	int num_lfs, cptlf, slot, err;
 	struct rvu_block *block;
 
 	block = &rvu->hw->block[blkaddr];
@@ -403,10 +407,15 @@ static int cpt_lf_free(struct rvu *rvu, struct msg_req *req, int blkaddr)
 		if (cptlf < 0)
 			return CPT_AF_ERR_LF_INVALID;
 
-		/* Reset CPT LF group and priority */
-		rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL(cptlf), 0x0);
-		/* Reset CPT LF NIX_PF_FUNC and SSO_PF_FUNC */
-		rvu_write64(rvu, blkaddr, CPT_AF_LFX_CTL2(cptlf), 0x0);
+		/* Perform teardown */
+		rvu_cpt_lf_teardown(rvu, pcifunc, blkaddr, cptlf, slot);
+
+		/* Reset LF */
+		err = rvu_lf_reset(rvu, block, cptlf);
+		if (err) {
+			dev_err(rvu->dev, "Failed to reset blkaddr %d LF%d\n",
+				block->addr, cptlf);
+		}
 	}
 
 	return 0;
@@ -850,15 +859,10 @@ static void cpt_lf_disable_iqueue(struct rvu *rvu, int blkaddr, int slot)
 		dev_warn(rvu->dev, "CPT FLR hits hard loop counter\n");
 }
 
-int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int lf, int slot)
+int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int slot)
 {
-	int blkaddr;
 	u64 reg;
 
-	blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_CPT, pcifunc);
-	if (blkaddr != BLKADDR_CPT0 && blkaddr != BLKADDR_CPT1)
-		return -EINVAL;
-
 	/* Enable BAR2 ALIAS for this pcifunc. */
 	reg = BIT_ULL(16) | pcifunc;
 	rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg);

From patchwork Wed Oct 13 05:56:21 2021
X-Patchwork-Submitter: Srujana Challa
X-Patchwork-Id: 12554897
X-Patchwork-Delegate: kuba@kernel.org
From: Srujana Challa
Subject: [PATCH v2 net-next 3/3] octeontx2-af: Add support to flush full CPT CTX cache
Date: Wed, 13 Oct 2021 11:26:21 +0530
Message-ID: <20211013055621.1812301-4-schalla@marvell.com>
In-Reply-To: <20211013055621.1812301-1-schalla@marvell.com>
References: <20211013055621.1812301-1-schalla@marvell.com>
X-Mailing-List: netdev@vger.kernel.org

Adds support to flush or invalidate CPT CTX cache entries as part of FLR,
and provides a mailbox message to flush CPT CTX entries on a graceful exit.
This patch also adds support for AF -> CPT PF uplink mailbox messages and
adds a new mbox message to submit a CPT instruction from the AF.

Signed-off-by: Srujana Challa
---
 .../net/ethernet/marvell/octeontx2/af/mbox.h  |  12 +
 .../net/ethernet/marvell/octeontx2/af/rvu.h   |   1 +
 .../ethernet/marvell/octeontx2/af/rvu_cpt.c   | 206 ++++++++++++++++++
 .../ethernet/marvell/octeontx2/af/rvu_nix.c   |  11 +
 .../ethernet/marvell/octeontx2/af/rvu_reg.h   |   2 +
 5 files changed, 232 insertions(+)

diff --git a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
index dfe487235007..4e79e918a161 100644
--- a/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
+++ b/drivers/net/ethernet/marvell/octeontx2/af/mbox.h
@@ -191,6 +191,7 @@ M(CPT_INLINE_IPSEC_CFG, 0xA04, cpt_inline_ipsec_cfg, \
 M(CPT_STATS, 0xA05, cpt_sts, cpt_sts_req, cpt_sts_rsp) \
 M(CPT_RXC_TIME_CFG, 0xA06, cpt_rxc_time_cfg, cpt_rxc_time_cfg_req, \
 			msg_rsp) \
+M(CPT_CTX_CACHE_SYNC, 0xA07, cpt_ctx_cache_sync, msg_req, msg_rsp) \
 /* SDP mbox IDs (range 0x1000 - 0x11FF) */ \
 M(SET_SDP_CHAN_INFO, 0x1000, set_sdp_chan_info, sdp_chan_info_msg, msg_rsp) \
 M(GET_SDP_CHAN_INFO, 0x1001, get_sdp_chan_info, msg_req, sdp_get_chan_info_msg) \
@@ -292,10 +293,14 @@ M(NIX_BANDPROF_GET_HWINFO, 0x801f, nix_bandprof_get_hwinfo, msg_req, \
 #define MBOX_UP_CGX_MESSAGES \
 M(CGX_LINK_EVENT, 0xC00, cgx_link_event, cgx_link_info_msg, msg_rsp)
 
+#define MBOX_UP_CPT_MESSAGES \
+M(CPT_INST_LMTST, 0xD00, cpt_inst_lmtst, cpt_inst_lmtst_req, msg_rsp)
+
 enum {
 #define M(_name, _id, _1, _2, _3) MBOX_MSG_ ## _name = _id,
 MBOX_MESSAGES
 MBOX_UP_CGX_MESSAGES
+MBOX_UP_CPT_MESSAGES
 #undef M
 };
 
@@ -1562,6 +1567,13 @@ struct cpt_rxc_time_cfg_req {
 	u16 active_limit;
 };
 
+/* Mailbox message request format to request for CPT_INST_S lmtst.
*/ +struct cpt_inst_lmtst_req { + struct mbox_msghdr hdr; + u64 inst[8]; + u64 rsvd; +}; + struct sdp_node_info { /* Node to which this PF belons to */ u8 node_id; diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h index 75aa0b8cfe58..66e45d733824 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu.h @@ -817,6 +817,7 @@ int rvu_cpt_register_interrupts(struct rvu *rvu); void rvu_cpt_unregister_interrupts(struct rvu *rvu); int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int slot); +int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc); /* CN10K RVU */ int rvu_set_channels_base(struct rvu *rvu); diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c index 7bab20403bba..45357deecabb 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_cpt.c @@ -795,6 +795,58 @@ int rvu_mbox_handler_cpt_rxc_time_cfg(struct rvu *rvu, return 0; } +int rvu_mbox_handler_cpt_ctx_cache_sync(struct rvu *rvu, struct msg_req *req, + struct msg_rsp *rsp) +{ + return rvu_cpt_ctx_flush(rvu, req->hdr.pcifunc); +} + +static void cpt_rxc_teardown(struct rvu *rvu, int blkaddr) +{ + struct cpt_rxc_time_cfg_req req; + int timeout = 2000; + u64 reg; + + if (is_rvu_otx2(rvu)) + return; + + /* Set time limit to minimum values, so that rxc entries will be + * flushed out quickly. + */ + req.step = 1; + req.zombie_thres = 1; + req.zombie_limit = 1; + req.active_thres = 1; + req.active_limit = 1; + + cpt_rxc_time_cfg(rvu, &req, blkaddr); + + do { + reg = rvu_read64(rvu, blkaddr, CPT_AF_RXC_ACTIVE_STS); + udelay(1); + if (FIELD_GET(RXC_ACTIVE_COUNT, reg)) + timeout--; + else + break; + } while (timeout); + + if (timeout == 0) + dev_warn(rvu->dev, "Poll for RXC active count hits hard loop counter\n"); + + timeout = 2000; + do { + reg = rvu_read64(rvu, blkaddr, CPT_AF_RXC_ZOMBIE_STS); + udelay(1); + if (FIELD_GET(RXC_ZOMBIE_COUNT, reg)) + timeout--; + else + break; + } while (timeout); + + if (timeout == 0) + dev_warn(rvu->dev, "Poll for RXC zombie count hits hard loop counter\n"); +} + #define INPROG_INFLIGHT(reg) ((reg) & 0x1FF) #define INPROG_GRB_PARTIAL(reg) ((reg) & BIT_ULL(31)) #define INPROG_GRB(reg) (((reg) >> 32) & 0xFF) @@ -863,6 +915,9 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int s { u64 reg; + if (is_cpt_pf(rvu, pcifunc) || is_cpt_vf(rvu, pcifunc)) + cpt_rxc_teardown(rvu, blkaddr); + /* Enable BAR2 ALIAS for this pcifunc. 
*/ reg = BIT_ULL(16) | pcifunc; rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg); @@ -878,3 +933,154 @@ int rvu_cpt_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int lf, int s return 0; } + +#define CPT_RES_LEN 16 +#define CPT_SE_IE_EGRP 1ULL + +static int cpt_inline_inb_lf_cmd_send(struct rvu *rvu, int blkaddr, + int nix_blkaddr) +{ + int cpt_pf_num = get_cpt_pf_num(rvu); + struct cpt_inst_lmtst_req *req; + dma_addr_t res_daddr; + int timeout = 3000; + u8 cpt_idx; + u64 *inst; + u16 *res; + int rc; + + res = kzalloc(CPT_RES_LEN, GFP_KERNEL); + if (!res) + return -ENOMEM; + + res_daddr = dma_map_single(rvu->dev, res, CPT_RES_LEN, + DMA_BIDIRECTIONAL); + if (dma_mapping_error(rvu->dev, res_daddr)) { + dev_err(rvu->dev, "DMA mapping failed for CPT result\n"); + rc = -EFAULT; + goto res_free; + } + *res = 0xFFFF; + + /* Send mbox message to CPT PF */ + req = (struct cpt_inst_lmtst_req *) + otx2_mbox_alloc_msg_rsp(&rvu->afpf_wq_info.mbox_up, + cpt_pf_num, sizeof(*req), + sizeof(struct msg_rsp)); + if (!req) { + rc = -ENOMEM; + goto res_daddr_unmap; + } + req->hdr.sig = OTX2_MBOX_REQ_SIG; + req->hdr.id = MBOX_MSG_CPT_INST_LMTST; + + inst = req->inst; + /* Prepare CPT_INST_S */ + inst[0] = 0; + inst[1] = res_daddr; + /* AF PF FUNC */ + inst[2] = 0; + /* Set QORD */ + inst[3] = 1; + inst[4] = 0; + inst[5] = 0; + inst[6] = 0; + /* Set EGRP */ + inst[7] = CPT_SE_IE_EGRP << 61; + + /* Subtract 1 from the NIX-CPT credit count to preserve + * credit counts. + */ + cpt_idx = (blkaddr == BLKADDR_CPT0) ? 0 : 1; + rvu_write64(rvu, nix_blkaddr, NIX_AF_RX_CPTX_CREDIT(cpt_idx), + BIT_ULL(22) - 1); + + otx2_mbox_msg_send(&rvu->afpf_wq_info.mbox_up, cpt_pf_num); + rc = otx2_mbox_wait_for_rsp(&rvu->afpf_wq_info.mbox_up, cpt_pf_num); + if (rc) + dev_warn(rvu->dev, "notification to pf %d failed\n", + cpt_pf_num); + /* Wait for CPT instruction to be completed */ + do { + mdelay(1); + if (*res == 0xFFFF) + timeout--; + else + break; + } while (timeout); + + if (timeout == 0) + dev_warn(rvu->dev, "Poll for result hits hard loop counter\n"); + +res_daddr_unmap: + dma_unmap_single(rvu->dev, res_daddr, CPT_RES_LEN, DMA_BIDIRECTIONAL); +res_free: + kfree(res); + + return 0; +} + +#define CTX_CAM_PF_FUNC GENMASK_ULL(61, 46) +#define CTX_CAM_CPTR GENMASK_ULL(45, 0) + +int rvu_cpt_ctx_flush(struct rvu *rvu, u16 pcifunc) +{ + int nix_blkaddr, blkaddr; + u16 max_ctx_entries, i; + int slot = 0, num_lfs; + u64 reg, cam_data; + int rc; + + nix_blkaddr = rvu_get_blkaddr(rvu, BLKTYPE_NIX, pcifunc); + if (nix_blkaddr < 0) + return -EINVAL; + + if (is_rvu_otx2(rvu)) + return 0; + + blkaddr = (nix_blkaddr == BLKADDR_NIX1) ? BLKADDR_CPT1 : BLKADDR_CPT0; + + /* Submit CPT_INST_S to track when all packets have been + * flushed through for the NIX PF FUNC in inline inbound case. + */ + rc = cpt_inline_inb_lf_cmd_send(rvu, blkaddr, nix_blkaddr); + if (rc) + return rc; + + /* Wait for rxc entries to be flushed out */ + cpt_rxc_teardown(rvu, blkaddr); + + reg = rvu_read64(rvu, blkaddr, CPT_AF_CONSTANTS0); + max_ctx_entries = (reg >> 48) & 0xFFF; + + mutex_lock(&rvu->rsrc_lock); + + num_lfs = rvu_get_rsrc_mapcount(rvu_get_pfvf(rvu, pcifunc), + blkaddr); + if (num_lfs == 0) { + dev_warn(rvu->dev, "CPT LF is not configured\n"); + goto unlock; + } + + /* Enable BAR2 ALIAS for this pcifunc. 
*/ + reg = BIT_ULL(16) | pcifunc; + rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, reg); + + for (i = 0; i < max_ctx_entries; i++) { + cam_data = rvu_read64(rvu, blkaddr, CPT_AF_CTX_CAM_DATA(i)); + + if ((FIELD_GET(CTX_CAM_PF_FUNC, cam_data) == pcifunc) && + FIELD_GET(CTX_CAM_CPTR, cam_data)) { + reg = BIT_ULL(46) | FIELD_GET(CTX_CAM_CPTR, cam_data); + rvu_write64(rvu, blkaddr, + CPT_AF_BAR2_ALIASX(slot, CPT_LF_CTX_FLUSH), + reg); + } + } + rvu_write64(rvu, blkaddr, CPT_AF_BAR2_SEL, 0); + +unlock: + mutex_unlock(&rvu->rsrc_lock); + + return 0; +} diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c index 67feb26792e4..7761dcf17b91 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_nix.c @@ -4512,6 +4512,8 @@ int rvu_mbox_handler_nix_lf_stop_rx(struct rvu *rvu, struct msg_req *req, return rvu_cgx_start_stop_io(rvu, pcifunc, false); } +#define RX_SA_BASE GENMASK_ULL(52, 7) + void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf) { struct rvu_pfvf *pfvf = rvu_get_pfvf(rvu, pcifunc); @@ -4519,6 +4521,7 @@ void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf) int pf = rvu_get_pf(pcifunc); struct mac_ops *mac_ops; u8 cgx_id, lmac_id; + u64 sa_base; void *cgxd; int err; @@ -4575,6 +4578,14 @@ void rvu_nix_lf_teardown(struct rvu *rvu, u16 pcifunc, int blkaddr, int nixlf) nix_ctx_free(rvu, pfvf); nix_free_all_bandprof(rvu, pcifunc); + + sa_base = rvu_read64(rvu, blkaddr, NIX_AF_LFX_RX_IPSEC_SA_BASE(nixlf)); + if (FIELD_GET(RX_SA_BASE, sa_base)) { + err = rvu_cpt_ctx_flush(rvu, pcifunc); + if (err) + dev_err(rvu->dev, + "CPT ctx flush failed with error: %d\n", err); + } } #define NIX_AF_LFX_TX_CFG_PTP_EN BIT_ULL(32) diff --git a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h index dbaeb10de7c2..22cd751613cd 100644 --- a/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h +++ b/drivers/net/ethernet/marvell/octeontx2/af/rvu_reg.h @@ -527,6 +527,7 @@ #define CPT_AF_CTX_WBACK_LATENCY_PC (0x49448ull) #define CPT_AF_CTX_PSH_PC (0x49450ull) #define CPT_AF_CTX_PSH_LATENCY_PC (0x49458ull) +#define CPT_AF_CTX_CAM_DATA(a) (0x49800ull | (u64)(a) << 3) #define CPT_AF_RXC_TIME (0x50010ull) #define CPT_AF_RXC_TIME_CFG (0x50018ull) #define CPT_AF_RXC_DFRG (0x50020ull) @@ -544,6 +545,7 @@ #define CPT_LF_CTL 0x10 #define CPT_LF_INPROG 0x40 #define CPT_LF_Q_GRP_PTR 0x120 +#define CPT_LF_CTX_FLUSH 0x510 #define NPC_AF_BLK_RST (0x00040)
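
The CPT PF driver that consumes the new MBOX_MSG_CPT_INST_LMTST uplink message is not
part of this series. As a rough illustration only -- the function and type names below
are hypothetical placeholders; only MBOX_MSG_CPT_INST_LMTST and struct cpt_inst_lmtst_req
come from this patch -- a PF-side consumer would look roughly like:

	/* Hypothetical PF-side dispatch of an AF -> PF uplink message. */
	static int cptpf_handle_af_up_msg(struct cptpf_dev *cptpf,	/* hypothetical type */
					  struct mbox_msghdr *msg)
	{
		struct cpt_inst_lmtst_req *req;

		if (msg->id != MBOX_MSG_CPT_INST_LMTST)
			return -EINVAL;

		/* hdr is the first member of struct cpt_inst_lmtst_req */
		req = container_of(msg, struct cpt_inst_lmtst_req, hdr);

		/* req->inst[] carries a ready-built CPT_INST_S; the PF submits it
		 * to one of its own queues with an LMTST operation.
		 */
		return cptpf_submit_inst_lmtst(cptpf, req->inst);	/* hypothetical helper */
	}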