From patchwork Thu Feb 22 17:07:48 2024
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-Patchwork-Submitter: Sean Anderson
X-Patchwork-Id: 13567622
X-Patchwork-Delegate: kuba@kernel.org
From: Sean Anderson
To: "David S. Miller", Eric Dumazet, Jakub Kicinski, Paolo Abeni,
	netdev@vger.kernel.org
Cc: linux-arm-kernel@lists.infradead.org, Claudiu Manoil, Vladimir Oltean,
	linuxppc-dev@lists.ozlabs.org, Steffen Trumtrar,
	linux-kernel@vger.kernel.org, Camelia Groza, Roy Pledge, Scott Wood,
	Li Yang, Sean Anderson, stable@vger.kernel.org
Subject: [RESEND2 PATCH net v4 1/2] soc: fsl: qbman: Always disable
	interrupts when taking cgr_lock
Date: Thu, 22 Feb 2024 12:07:48 -0500
Message-Id: <20240222170749.2607485-1-sean.anderson@linux.dev>
X-Mailing-List: netdev@vger.kernel.org

smp_call_function_single disables IRQs when executing the callback. To
prevent deadlocks, we must disable IRQs when taking cgr_lock elsewhere.
This is already done by qman_update_cgr and qman_delete_cgr; fix the
other lockers.
Fixes: 96f413f47677 ("soc/fsl/qbman: fix issue in qman_delete_cgr_safe()")
CC: stable@vger.kernel.org
Signed-off-by: Sean Anderson
Reviewed-by: Camelia Groza
Tested-by: Vladimir Oltean
---
Resent from a non-mangling email.

(no changes since v3)

Changes in v3:
- Change blamed commit to something more appropriate

Changes in v2:
- Fix one additional call to spin_unlock

 drivers/soc/fsl/qbman/qman.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/soc/fsl/qbman/qman.c b/drivers/soc/fsl/qbman/qman.c
index 739e4eee6b75..1bf1f1ea67f0 100644
--- a/drivers/soc/fsl/qbman/qman.c
+++ b/drivers/soc/fsl/qbman/qman.c
@@ -1456,11 +1456,11 @@ static void qm_congestion_task(struct work_struct *work)
 	union qm_mc_result *mcr;
 	struct qman_cgr *cgr;
 
-	spin_lock(&p->cgr_lock);
+	spin_lock_irq(&p->cgr_lock);
 	qm_mc_start(&p->p);
 	qm_mc_commit(&p->p, QM_MCC_VERB_QUERYCONGESTION);
 	if (!qm_mc_result_timeout(&p->p, &mcr)) {
-		spin_unlock(&p->cgr_lock);
+		spin_unlock_irq(&p->cgr_lock);
 		dev_crit(p->config->dev, "QUERYCONGESTION timeout\n");
 		qman_p_irqsource_add(p, QM_PIRQ_CSCI);
 		return;
@@ -1476,7 +1476,7 @@ static void qm_congestion_task(struct work_struct *work)
 	list_for_each_entry(cgr, &p->cgr_cbs, node)
 		if (cgr->cb && qman_cgrs_get(&c, cgr->cgrid))
 			cgr->cb(p, cgr, qman_cgrs_get(&rr, cgr->cgrid));
-	spin_unlock(&p->cgr_lock);
+	spin_unlock_irq(&p->cgr_lock);
 	qman_p_irqsource_add(p, QM_PIRQ_CSCI);
 }
 
@@ -2440,7 +2440,7 @@ int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
 	preempt_enable();
 
 	cgr->chan = p->config->channel;
-	spin_lock(&p->cgr_lock);
+	spin_lock_irq(&p->cgr_lock);
 
 	if (opts) {
 		struct qm_mcc_initcgr local_opts = *opts;
@@ -2477,7 +2477,7 @@ int qman_create_cgr(struct qman_cgr *cgr, u32 flags,
 	    qman_cgrs_get(&p->cgrs[1], cgr->cgrid))
 		cgr->cb(p, cgr, 1);
 out:
-	spin_unlock(&p->cgr_lock);
+	spin_unlock_irq(&p->cgr_lock);
 	put_affine_portal();
 	return ret;
 }
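
For illustration only, a minimal sketch of the deadlock pattern the commit
message describes; it is not part of this patch, and the names below
(example_lock, example_ipi_callback, example_trigger, example_task_path) are
hypothetical stand-ins for cgr_lock and the QMan code paths. The point it
shows: a callback run via smp_call_function_single() executes with hard IRQs
disabled on the target CPU, so any other path taking the same lock must also
disable IRQs, otherwise the IPI can interrupt the lock holder on that CPU and
spin forever.

/*
 * Hypothetical example, not from qman.c: "example_lock" stands in for
 * cgr_lock, and the functions stand in for the QMan callers.
 */
#include <linux/smp.h>
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(example_lock);

/* Runs in IPI context on the target CPU, with hard IRQs disabled. */
static void example_ipi_callback(void *info)
{
	spin_lock(&example_lock);	/* IRQs are already off here */
	/* ... touch shared state ... */
	spin_unlock(&example_lock);
}

/* Asks @cpu to run the callback and waits for it to complete. */
static void example_trigger(int cpu)
{
	smp_call_function_single(cpu, example_ipi_callback, NULL, 1);
}

/* Task-context path (e.g. a workqueue handler) on that same CPU. */
static void example_task_path(void)
{
	/*
	 * With plain spin_lock() the IPI could arrive while the lock is
	 * held; its callback would then spin on example_lock forever,
	 * because the interrupted holder can never run again on this CPU.
	 * Disabling IRQs defers the IPI until the lock is released.
	 */
	spin_lock_irq(&example_lock);
	/* ... touch shared state ... */
	spin_unlock_irq(&example_lock);
}

This mirrors the change in the diff above: qm_congestion_task() and
qman_create_cgr() run in task context, so they switch to spin_lock_irq(),
matching the IRQ-disabled callers that already exist.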